
test(examples): Add some examples leveraging pydantic-AI and other chatlas alternatives #66


Open. Wants to merge 31 commits into main.

Conversation

karangattu

@karangattu karangattu commented Jun 3, 2025

This pull request adds the following examples to shinychat:

PydanticAI

  • Basic streaming app
  • App using tool calling
  • App using structured output
  • Data Science Adventure
  • Workout plan app

LlamaIndex

  • Basic streaming app
  • App using tool calling
  • App using structured output
  • Real world example of RAG using chatlas

llm

  • Basic streaming app
  • App using tool calling
  • App using structured output

Langchain

  • Basic streaming app
  • App using tool calling
  • App using structured output

@karangattu karangattu requested a review from gadenbuie June 3, 2025 22:48
@karangattu karangattu changed the title test(examples): Add some examples leveraging pydantic-AI test(examples): Add some examples leveraging pydantic-AI and other chatlas alternatives Jun 18, 2025
@karangattu karangattu requested a review from cpsievert June 18, 2025 07:38
Comment on lines 38 to 42
# An async generator function to stream the response from the Pydantic AI agent
async def pydantic_stream_generator(user_input: str):
async with chat_client.run_stream(user_input) as result:
async for chunk in result.stream_text(delta=True):
yield chunk
Collaborator

I just realized this won't retain message history. I think the simplest way to do that now is supposed to be something like:

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage

def retain_messages(
    messages: "list[ModelMessage]",
) -> "list[ModelMessage]":
    return messages

chat_client = Agent(
    "openai:o4-mini",
    system_prompt="You are a helpful assistant.",
    history_processors=[retain_messages],
)

Unfortunately, though, that is leading to an error for me. Does it work for you? If it doesn't in fact work, it seems like something we'd want to report upstream.

Author

No, if I copy the code you shared, it throws an error: shiny.types.NotifyException: Error in Chat('chat'): name 'ModelMessage' is not defined

import os

from dotenv import load_dotenv
from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage
from shiny.express import ui

_ = load_dotenv()


def retain_messages(
    messages: "list[ModelMessage]",
) -> "list[ModelMessage]":
    return messages


chat_client = Agent(
    "openai:gpt-4.1-nano-2025-04-14",
    system_prompt="You are a helpful assistant.",
    history_processors=[retain_messages],
)


# Set some Shiny page options
ui.page_opts(
    title="OpenAI with Pydantic AI",
    fillable=True,
    fillable_mobile=True,
)


# Create and display a Shiny chat component
chat = ui.Chat(
    id="chat",
    messages=[
        "Hello! I am an assistant powered by Pydantic AI. Ask me anything you'd like to know or do.",
    ],
)
chat.ui()


# Generate a response when the user submits a message
@chat.on_user_submit
async def handle_user_input(user_input: str):
    stream = pydantic_stream_generator(user_input)
    await chat.append_message_stream(stream)


# An async generator function to stream the response from the Pydantic AI agent
async def pydantic_stream_generator(user_input: str):
    async with chat_client.run_stream(user_input) as result:
        async for chunk in result.stream_text(delta=True):
            yield chunk
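One possible explanation for the NameError, assuming pydantic-ai resolves the processor's annotations with typing.get_type_hints(): the quoted string "list[ModelMessage]" must be evaluable in the function's globals, and if the function ends up defined in a namespace where ModelMessage is not visible (Shiny Express executes the app file in its own namespace), resolution fails. A dependency-free sketch of that failure mode, not a confirmed diagnosis of pydantic-ai's internals:

```python
from typing import get_type_hints

# Recreate the processor in a fresh namespace that lacks ModelMessage,
# mimicking a situation where the quoted annotation cannot be resolved.
namespace = {}
exec(
    'def retain_messages(messages: "list[ModelMessage]")'
    ' -> "list[ModelMessage]":\n'
    "    return messages\n",
    namespace,
)

try:
    # get_type_hints() evaluates the string annotations in the
    # function's globals, where ModelMessage is undefined.
    get_type_hints(namespace["retain_messages"])
    outcome = "resolved"
except NameError as err:
    outcome = f"NameError: {err}"

print(outcome)
```

If this is the mechanism, defining the processor with unquoted annotations (with ModelMessage imported in the same namespace) might sidestep it.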

Author
That said, I got a version of the example you suggested working by storing the conversation history in a reactive value:

from typing import List

from dotenv import load_dotenv
from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage
from shiny import reactive
from shiny.express import ui

_ = load_dotenv()
chat_client = Agent(
    "openai:gpt-4.1-nano-2025-04-14",
    system_prompt="You are a helpful assistant.",
)

conversation_history: reactive.Value[List[ModelMessage]] = reactive.value([])

ui.page_opts(
    title="Hello OpenAI Chat",
    fillable=True,
    fillable_mobile=True,
)

chat = ui.Chat(
    id="chat",
    messages=["Hello! How can I help you today?"],
)
chat.ui()


@chat.on_user_submit
async def handle_user_input(user_input: str):
    current_history = conversation_history.get()
    stream = pydantic_stream_generator(user_input, current_history)
    await chat.append_message_stream(stream)


async def pydantic_stream_generator(
    user_input: str, current_history: List[ModelMessage]
):
    message_history = current_history if current_history else None
    async with chat_client.run_stream(
        user_input, message_history=message_history
    ) as result:
        async for chunk in result.stream_text(delta=True):
            yield chunk
        conversation_history.set(result.all_messages())
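The pattern above can be sketched without Shiny or pydantic-ai. FakeStream below is a stand-in (an assumption, not pydantic-ai's API) for the object returned by run_stream(); the key point is that the shared history is updated only after the stream has been fully consumed:

```python
import asyncio


class FakeStream:
    """Stand-in for the result of run_stream(): streams a reply in
    chunks and exposes the full message list afterwards."""

    def __init__(self, history, user_input):
        self._history = list(history) + [("user", user_input)]
        self._reply = "echo: " + user_input

    async def stream_text(self, delta=True):
        # Yield the reply word by word, like delta-mode streaming.
        for word in self._reply.split():
            yield word + " "

    def all_messages(self):
        return self._history + [("assistant", self._reply)]


history = []


async def stream_generator(user_input):
    global history
    result = FakeStream(history, user_input)
    async for chunk in result.stream_text(delta=True):
        yield chunk
    # Update the shared history only once the full response is available.
    history = result.all_messages()


async def demo():
    first = [c async for c in stream_generator("hello")]
    second = [c async for c in stream_generator("again")]
    return first, second


first, second = asyncio.run(demo())
```

After two turns, history holds all four messages, so each new call to stream_generator() sees the prior exchange, which mirrors what passing message_history back into run_stream() accomplishes.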

@karangattu karangattu requested a review from cpsievert July 1, 2025 10:50
karangattu and others added 5 commits July 15, 2025 08:46
Standardizes chat UI initialization by moving message setup into chat.ui() calls and enhances assistant messages with suggestions and clearer instructions across multiple Playwright chat test apps. Also updates imports and agent usage for consistency and clarity.
Moved the import of Context to group it with related imports, improving code readability and maintaining a consistent import order.
@karangattu karangattu requested a review from cpsievert July 15, 2025 04:40
@cpsievert
Copy link
Collaborator

Thanks, these examples are looking good!

pkg-py/tests/playwright doesn't really feel like the proper final destination for them, though. Ideally, they'll end up being more discoverable.

One idea to help with this would be to create a "framework playground app", that has:

  • An input_select() to choose the framework. (e.g., pydantic AI, LangChain, etc)
  • An input_select() to choose the example "flavor" (i.e., basic, tool calling, etc)
  • An <iframe> (or equivalent) displaying the relevant example app.
  • A link to the source code behind the example.

And, once we have that, we could have a new template under https://shiny.posit.co/py/templates/#generative-ai for the playground app. I think it'd also make sense for each example app to be a "template" within https://github.com/posit-dev/py-shiny-templates/tree/main/gen-ai. That way, we could leverage the infrastructure in that repo for deployment.

Does that seem like something you'd have the bandwidth to take on? It's not something I'd view as super urgent to deliver on, but it would be great to have some slow, steady progress on it.
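The wiring behind such a playground could be as simple as a lookup that resolves the two input_select() values to an example app's source path, which then feeds the iframe and the source-code link. A hypothetical sketch; the framework names, flavors, and paths below are invented placeholders, not paths that exist in the repo:

```python
# Map the two dropdown selections to a (hypothetical) example app path.
FRAMEWORKS = ("pydantic-ai", "langchain", "llama-index", "llm")
FLAVORS = ("basic", "tool-calling", "structured-output")


def example_path(framework: str, flavor: str) -> str:
    """Return the source path for the selected example, or raise
    ValueError for an unknown framework/flavor combination."""
    if framework not in FRAMEWORKS or flavor not in FLAVORS:
        raise ValueError(f"unknown example: {framework}/{flavor}")
    return f"examples/{framework}/{flavor}/app.py"
```

In the app, a reactive expression would call example_path() on the two selections and render the iframe src and source link from the result.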

@karangattu
Author

Sure, I'll work on this this week or next.
